AI Lifecycle
The SMART+ Framework for AI Systems
Kandikatla, Laxmiraju, Radeljic, Branislav
Artificial Intelligence (AI) systems are now an integral part of multiple industries. In clinical research, AI supports automated adverse event detection in clinical trials, patient eligibility screening for protocol enrollment, and data quality validation. Beyond healthcare, AI is transforming finance through real-time fraud detection, automated loan risk assessment, and algorithmic decision-making. Similarly, in manufacturing, AI enables predictive maintenance to reduce equipment downtime, enhances quality control through computer-vision inspection, and optimizes production workflows using real-time operational data. While these technologies enhance operational efficiency, they introduce new challenges regarding safety, accountability, and regulatory compliance. To address these concerns, we introduce the SMART+ Framework, a structured model built on the pillars of Safety, Monitoring, Accountability, Reliability, and Transparency, and further enhanced with Privacy & Security, Data Governance, Fairness & Bias, and Guardrails. SMART+ offers a practical, comprehensive approach to evaluating and governing AI systems across industries. The framework aligns with evolving regulatory guidance, integrating operational safeguards, oversight procedures, and strengthened privacy and governance controls, and it supports risk mitigation, trust-building, and compliance readiness. By enabling responsible AI adoption and ensuring auditability, SMART+ provides a robust foundation for effective AI governance in clinical research.
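To make the nine pillars concrete, here is a minimal sketch of how a SMART+ assessment could be recorded in code. The pillar names come from the abstract; the 0-5 maturity scale, the field names, and the `readiness` heuristic are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

# The nine SMART+ dimensions named in the abstract; the 0-5 maturity
# scale and the readiness heuristic below are illustrative assumptions.
SMART_PLUS_PILLARS = [
    "Safety", "Monitoring", "Accountability", "Reliability", "Transparency",
    "Privacy & Security", "Data Governance", "Fairness & Bias", "Guardrails",
]

@dataclass
class SmartPlusAssessment:
    system_name: str
    scores: dict = field(default_factory=dict)  # pillar -> 0..5 maturity

    def rate(self, pillar: str, score: int) -> None:
        if pillar not in SMART_PLUS_PILLARS:
            raise ValueError(f"Unknown pillar: {pillar}")
        if not 0 <= score <= 5:
            raise ValueError("Maturity score must be in 0..5")
        self.scores[pillar] = score

    def readiness(self) -> float:
        """Overall maturity; unrated pillars count as gaps (score 0)."""
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / (5 * len(SMART_PLUS_PILLARS))

# Example: a hypothetical adverse-event-detection system audit
audit = SmartPlusAssessment("ae-detection-pilot")
audit.rate("Safety", 4)
audit.rate("Guardrails", 2)
print(f"Readiness: {audit.readiness():.0%}")
```

Keeping unrated pillars as explicit gaps in the denominator mirrors the framework's auditability goal: an assessment cannot look complete until every pillar has been reviewed.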
- North America > Canada > Quebec > Montreal (0.04)
- North America > United States > New Jersey > Middlesex County > Edison (0.04)
- Africa > Zambia > Southern Province > Choma (0.04)
- Research Report > Experimental Study (0.88)
- Research Report > New Finding (0.74)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.86)
Co-Producing AI: Toward an Augmented, Participatory Lifecycle
Mushkani, Rashid, Berard, Hugo, Ammar, Toumadher, Chatonnier, Cassandre, Koseki, Shin
Despite efforts to mitigate the inherent risks and biases of artificial intelligence (AI) algorithms, these algorithms can disproportionately impact culturally marginalized groups. A range of approaches has been proposed to address or reduce these risks, including the development of ethical guidelines and principles for responsible AI, as well as technical solutions that promote algorithmic fairness. Drawing on design justice, expansive learning theory, and recent empirical work on participatory AI, we argue that mitigating these harms requires a fundamental re-architecture of the AI production pipeline. This re-design should center co-production, diversity, equity, inclusion (DEI), and multidisciplinary collaboration. We introduce an augmented AI lifecycle consisting of five interconnected phases: co-framing, co-design, co-implementation, co-deployment, and co-maintenance. The lifecycle is informed by four multidisciplinary workshops and grounded in themes of distributed authority and iterative knowledge exchange. Finally, we relate the proposed lifecycle to several leading ethical frameworks and outline key research questions that remain for scaling participatory governance.
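The five phases lend themselves to a simple executable model. The sketch below encodes them as an ordered enum and, as a simplifying assumption of ours, loops co-maintenance back into co-framing to reflect the iterative knowledge exchange the authors emphasize; the type and function names are hypothetical.

```python
from enum import Enum

class CoProductionPhase(Enum):
    """The five phases of the augmented AI lifecycle described above."""
    CO_FRAMING = 1
    CO_DESIGN = 2
    CO_IMPLEMENTATION = 3
    CO_DEPLOYMENT = 4
    CO_MAINTENANCE = 5

def next_phase(current: CoProductionPhase) -> CoProductionPhase:
    """Advance the lifecycle; co-maintenance loops back to co-framing
    (an assumed reading of the iterative knowledge-exchange theme)."""
    order = list(CoProductionPhase)
    return order[(order.index(current) + 1) % len(order)]

assert next_phase(CoProductionPhase.CO_MAINTENANCE) is CoProductionPhase.CO_FRAMING
```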
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Asia > Middle East > Jordan (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (5 more...)
- Research Report > Experimental Study (0.54)
- Research Report > New Finding (0.34)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
Design and Validation of a Responsible Artificial Intelligence-based System for the Referral of Diabetic Retinopathy Patients
Moya-Sánchez, E. Ulises, Sánchez-Perez, Abraham, Da Veiga, Raúl Nanclares, Zarate-Macías, Alejandro, Villareal, Edgar, Sánchez-Montes, Alejandro, Jauregui-Ulloa, Edtna, Moreno, Héctor, Cortés, Ulises
Diabetic Retinopathy (DR) is a leading cause of vision loss in working-age individuals. Early detection of DR can reduce the risk of vision loss by up to 95%, but a shortage of retinologists and challenges in timely examination complicate detection. Artificial Intelligence (AI) models using retinal fundus photographs (RFPs) offer a promising solution. However, adoption in clinical settings is hindered by low-quality data and biases that may lead AI systems to learn unintended features. To address these challenges, we developed RAIS-DR, a Responsible AI System for DR screening that incorporates ethical principles across the AI lifecycle. RAIS-DR integrates efficient convolutional models for preprocessing, quality assessment, and three specialized DR classification models. We evaluated RAIS-DR against the FDA-approved EyeArt system on a local dataset of 1,046 patients, unseen by both systems. RAIS-DR demonstrated significant improvements, with F1 scores increasing by 5-12%, accuracy by 6-19%, and specificity by 10-20%. Additionally, fairness metrics such as Disparate Impact and Equal Opportunity Difference indicated equitable performance across demographic subgroups, underscoring RAIS-DR's potential to reduce healthcare disparities. These results highlight RAIS-DR as a robust and ethically aligned solution for DR screening in clinical settings. The code and weights of RAIS-DR are available at https://gitlab.com/inteligencia-gubernamental-jalisco/jalisco-retinopathy under a Responsible AI License (RAIL).
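Both fairness metrics cited here have standard definitions and are straightforward to compute. The sketch below is a generic illustration on synthetic data, not the RAIS-DR evaluation code; the convention that `group == 0` marks the unprivileged subgroup is an assumption of the example.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-prediction rates:
    P(Yhat=1 | unprivileged) / P(Yhat=1 | privileged); 1.0 is parity."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group) -> float:
    """TPR(unprivileged) - TPR(privileged); 0.0 means equal opportunity."""
    def tpr(mask):
        positives = (y_true == 1) & mask
        return y_pred[positives].mean()
    return tpr(group == 0) - tpr(group == 1)

# Toy example on synthetic data (not the RAIS-DR cohort)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = y_true.copy()                 # start from a perfect screener,
flip = rng.random(1000) < 0.1          # then inject 10% label flips
y_pred[flip] = 1 - y_pred[flip]
print(disparate_impact(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

A common rule of thumb treats a Disparate Impact below 0.8 as a flag for adverse impact, which is why the metric is reported as a ratio rather than a difference.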
- North America > United States (0.89)
- Oceania > Australia (0.04)
- North America > Mexico > Jalisco > Guadalajara (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Health & Medicine > Therapeutic Area > Ophthalmology/Optometry (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (0.90)
- Government > Regional Government > North America Government > United States Government > FDA (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Vision (0.93)
TechOps: Technical Documentation Templates for the AI Act
Lucaj, Laura, Loosley, Alex, Jonsson, Hakan, Gasser, Urs, van der Smagt, Patrick
Operationalizing the EU AI Act requires clear technical documentation to ensure AI systems are transparent, traceable, and accountable. Existing documentation templates for AI systems do not fully cover the entire AI lifecycle while meeting the technical documentation requirements of the AI Act. This paper addresses those shortcomings by introducing open-source templates and examples for documenting data, models, and applications to provide sufficient documentation for certifying compliance with the AI Act. These templates track the system's status over the entire AI lifecycle, ensuring traceability, reproducibility, and compliance with the AI Act. They also promote discoverability and collaboration, reduce risks, and align with best practices in AI documentation and governance. The templates are evaluated and refined based on user feedback to enable insights into their usability and implementability. We then validate the approach on real-world scenarios, providing examples that further guide their implementation: the data template is followed to document a skin tones dataset created to support fairness evaluations of downstream computer vision models and human-centric applications; the model template is followed to document a neural network for segmenting human silhouettes in photos; and the application template is tested on a system deployed for construction site safety using real-time video analytics and sensor data. Our results show that TechOps can serve as a practical tool to enable oversight for regulatory compliance and responsible AI development.
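To illustrate the flavor of lifecycle-tracking documentation, here is a minimal, hypothetical model record. The field names are illustrative assumptions and do not reproduce the actual TechOps schema, which is published in the authors' open-source templates.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDoc:
    # Field names are illustrative, not the actual TechOps schema.
    name: str
    version: str
    intended_purpose: str
    training_data_ref: str   # link to the corresponding data template
    evaluation_metrics: dict
    known_limitations: list
    lifecycle_status: str    # e.g. "development", "deployed", "retired"

doc = ModelDoc(
    name="silhouette-segmenter",
    version="1.2.0",
    intended_purpose="Segment human silhouettes in photos",
    training_data_ref="data-docs/skin-tones-dataset.json",
    evaluation_metrics={"mIoU": 0.87},
    known_limitations=["degraded performance in low-light scenes"],
    lifecycle_status="deployed",
)
# Serialize so the record can be versioned alongside the model artifact
print(json.dumps(asdict(doc), indent=2))
```

Keeping the record machine-readable and version-controlled next to the model is what makes the traceability claim auditable: each release carries its own documentation state.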
- South America > Brazil (0.04)
- Europe > Switzerland (0.04)
- South America > Argentina > Patagonia > Río Negro Province > Viedma (0.04)
- (7 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
Security-First AI: Foundations for Robust and Trustworthy Systems
The conversation around artificial intelligence (AI) often focuses on safety, transparency, accountability, alignment, and responsibility. However, AI security (i.e., the safeguarding of data, models, and pipelines from adversarial manipulation) underpins all of these efforts. This manuscript posits that AI security must be prioritized as a foundational layer. We present a hierarchical view of AI challenges, distinguishing security from safety, and argue for a security-first approach to enable trustworthy and resilient AI systems. We discuss core threat models, key attack vectors, and emerging defense mechanisms, concluding that a metric-driven approach to AI security is essential for robust AI safety, transparency, and accountability.
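A metric-driven approach starts with quantifiable security indicators. One standard, model-agnostic example is attack success rate (ASR); the sketch below computes it from per-sample evaluation outcomes, with all data values hypothetical.

```python
import numpy as np

def attack_success_rate(clean_correct: np.ndarray,
                        adv_correct: np.ndarray) -> float:
    """Among inputs the model classifies correctly, the fraction an
    attack flips to an incorrect prediction. Inputs are boolean
    per-sample outcomes from clean and attacked evaluations."""
    initially_correct = clean_correct.astype(bool)
    flipped = initially_correct & ~adv_correct.astype(bool)
    return flipped.sum() / max(initially_correct.sum(), 1)

# Hypothetical outcomes for 8 test inputs (1 = correct prediction)
clean = np.array([1, 1, 1, 0, 1, 1, 0, 1])
adv   = np.array([0, 1, 0, 0, 1, 0, 0, 1])
print(f"ASR: {attack_success_rate(clean, adv):.2f}")  # 0.50
```

Tracking a metric like ASR over time is one way to make the manuscript's "security-first" argument operational: regressions in the metric become release blockers, just as accuracy regressions already are.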
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.68)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.69)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.68)
Meta-Sealing: A Revolutionizing Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI System
Krishnamoorthy, Mahesh Vaijainthymala
The rapid growth of AI adoption has introduced new challenges in ensuring the integrity, traceability, and verifiability of AI systems throughout their lifecycle [1]. As AI increasingly influences critical decision-making processes, the need for robust mechanisms to guarantee the trustworthiness of these systems has become paramount. Traditional approaches to data integrity and system verification fall short when applied to the complex, often opaque nature of AI systems. The dynamic nature of AI models, the vast amounts of data they process, and the intricate relationships between different stages of their lifecycle demand a more comprehensive and AI-specific approach to integrity assurance. This paper introduces Meta-Sealing, a novel integrity protocol designed specifically for AI systems. Meta-Sealing provides a cryptographic framework for sealing and verifying each stage of the AI lifecycle, from data collection to model retirement. By doing so, it addresses critical needs in enterprise AI deployment, including: (1) ensuring the integrity of training data and model artifacts; (2) creating verifiable audit trails of AI development and deployment processes; (3) enhancing the reproducibility of AI experiments and results; (4) facilitating compliance with emerging AI regulations and governance frameworks; and (5) building trust in AI systems among stakeholders and end-users. We present a detailed architecture for implementing Meta-Sealing, including core components, cryptographic operations, and integration strategies. Furthermore, we discuss performance optimizations and security considerations crucial for enterprise-grade deployments.
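The core sealing-and-verification idea can be illustrated with a hash chain over lifecycle records. The sketch below is a minimal approximation assuming only SHA-256 chaining; the actual Meta-Sealing protocol involves signatures, metadata structures, and integration components not shown here.

```python
import hashlib, json, time

def seal_stage(prev_seal: str, stage: str, artifact_digest: str) -> dict:
    """Create a tamper-evident seal for one lifecycle stage by chaining
    hashes, so altering any earlier record invalidates all later seals."""
    record = {
        "stage": stage,
        "artifact_sha256": artifact_digest,
        "prev_seal": prev_seal,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every seal and check each record points at its parent."""
    prev = "GENESIS"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "seal"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev_seal"] != prev:
            return False
        if hashlib.sha256(payload).hexdigest() != rec["seal"]:
            return False
        prev = rec["seal"]
    return True

# Seal the four lifecycle stages named in the abstract's span
chain, prev = [], "GENESIS"
for stage in ["data_collection", "training", "deployment", "retirement"]:
    rec = seal_stage(prev, stage, hashlib.sha256(stage.encode()).hexdigest())
    chain.append(rec)
    prev = rec["seal"]
print(verify_chain(chain))  # True; editing any field breaks verification
```

The chaining is what distinguishes this from independent checksums: an auditor verifying only the final seal implicitly verifies every earlier stage.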
- North America > United States > Texas > Tarrant County > Fort Worth (0.04)
- North America > United States > Pennsylvania (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Security & Privacy (1.00)
- Law (0.88)
- Government > Regional Government > North America Government > United States Government (0.68)
Rolling in the deep of cognitive and AI biases
Vakali, Athena, Tantalaki, Nicoleta
Nowadays, we delegate many of our decisions to Artificial Intelligence (AI), which acts either solo or as a human companion in decisions made to support several sensitive domains, like healthcare, financial services, and law enforcement. AI systems, even when carefully designed to be fair, are heavily criticized for delivering misjudged and discriminatory outcomes against individuals and groups. Numerous works on AI algorithmic fairness are devoted to Machine Learning pipelines that address biases and quantify fairness under a purely computational view. However, the continuing unfair and unjust AI outcomes indicate that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed, and deployed. Although the synergy of humans and machines seems imperative to make AI work, the significant impact of human and societal factors on AI bias is currently overlooked. We address this critical issue by following a radically new methodology under which human cognitive biases become core entities in our AI fairness overview. Inspired by the cognitive science definition and taxonomy of human heuristics, we identify how harmful human actions influence the overall AI lifecycle and reveal the hidden pathways from human to AI biases. We introduce a new mapping that relates human heuristics to their reflections in AI biases, and we detect relevant fairness intensities and interdependencies. We envision that this approach will contribute to revisiting AI fairness through deeper human-centric case studies, revealing the causes and effects of hidden biases.
- North America > United States > New York > New York County > New York City (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Greece > Central Macedonia > Thessaloniki (0.04)
- (5 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area (0.68)
Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Deck, Luca, Schomäcker, Astrid, Speith, Timo, Schöffer, Jakob, Kästner, Lena, Kühl, Niklas
The widespread use of artificial intelligence (AI) systems across various domains is increasingly surfacing issues related to algorithmic fairness, especially in high-stakes scenarios. Thus, critical considerations of how fairness in AI systems might be improved, and what measures are available to aid this process, are overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Besides, different measures to increase algorithmic fairness might be applicable at different points throughout an AI system's lifecycle. Yet, there currently is no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.
- Europe > Germany > Bavaria > Upper Franconia > Bayreuth (0.04)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- North America > United States > Hawaii (0.04)
- (6 more...)
- Research Report (1.00)
- Overview (1.00)
- Law (1.00)
- Health & Medicine (0.93)
- Information Technology > Security & Privacy (0.67)
- Government (0.66)
AI Maintenance: A Robustness Perspective
Just like the indispensable role of cars in the modern world, AI-empowered technology, and ML-based systems and algorithms, are bringing revolutionary changes and far-reaching impacts on our life, society, and environment, if not happening already. As AI models are perceived as a new "vehicle" to a better future, this article aims to stress the importance of formalizing and practicing AI maintenance from the robustness perspective, by drawing analogies in the model development and deployment between car and AI. Towards achieving trustworthiness and sustainability for AI, this article is motivated by the following question: Cars require regular inspection, maintenance, and continuous status monitoring, why should AI technology be any different?

In general, the performance of an AI model is evaluated in the average case, by comparing the model predictions on a set of data samples to their ground-truth labels and then using the average prediction result as a performance metric, such as the top-1 classification accuracy measuring the fraction of correct model predictions on the most likely (top-1) class over a dataset. In contrast, the adversarial scenario evaluates the model performance in the worst case among all possible and plausible changes (often pre-specified) to the data and AI model, by assuming a virtual adversary is in place. Moreover, the unseen scenario evaluates the model performance on new data samples that are drawn from a different data distribution than the seen data samples during training (but not ...)
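The average-case and worst-case evaluation scenarios contrast naturally in code. The sketch below compares top-1 accuracy with worst-case accuracy over a pre-specified perturbation set, using a toy one-dimensional classifier; all names and the perturbation set are illustrative assumptions.

```python
import numpy as np

def average_case_accuracy(predict, X: np.ndarray, y: np.ndarray) -> float:
    """Standard top-1 accuracy over a dataset."""
    return float((predict(X) == y).mean())

def worst_case_accuracy(predict, X, y, perturbations) -> float:
    """Accuracy under a virtual adversary: a sample counts as correct
    only if the model is right under every pre-specified perturbation."""
    correct = np.ones(len(y), dtype=bool)
    for perturb in perturbations:
        correct &= (predict(perturb(X)) == y)
    return float(correct.mean())

# Toy threshold classifier on 1-D inputs, with additive shifts as the
# pre-specified perturbation set (all values here are illustrative)
predict = lambda X: (X > 0.5).astype(int)
X = np.array([0.2, 0.45, 0.7, 0.9])
y = np.array([0, 0, 1, 1])
shifts = [lambda X, d=d: X + d for d in (-0.1, 0.0, 0.1)]
print(average_case_accuracy(predict, X, y))        # 1.0
print(worst_case_accuracy(predict, X, y, shifts))  # 0.75: 0.45 + 0.1
                                                   # crosses the threshold
```

The gap between the two numbers is exactly the maintenance signal the article argues for: a model can look perfect on average while failing inspection in the worst case.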
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.69)
- Transportation > Ground > Road (0.68)
- Automobiles & Trucks (0.68)
UK government announces its proposals for regulating AI
On 18 July 2022, the United Kingdom (UK) government set out its new proposals for regulating the use of artificial intelligence (AI) technologies while promoting innovation, boosting public trust, and protecting data. The proposals reflect a less centralised and more risk-based approach than in the EU's draft AI Act. The proposals coincide with the introduction to Parliament of the Data Protection and Digital Information Bill, which includes measures to use AI responsibly while reducing compliance burdens on businesses to boost the economy. One of the key challenges of regulating the use of AI is undoubtedly the pace of technology advancement in this area. The UK government's current proposal is to set out the core characteristics of AI, whilst allowing regulators to set out more detailed definitions according to their specific sectors. The government is of the opinion that it should regulate the use of AI, not the technology itself.